Reconstruction of gene regulatory network via sparse optimization

Lou, Jiashu, Cui, Leyi, Qiu, Wenxuan

arXiv.org Artificial Intelligence

For certain diseases or traits, genes play a crucial role in their expression, and the interactions between genes, i.e., gene regulatory networks (GRNs), have become a recent research hotspot. With the availability of high-throughput gene expression data and the substantial increase in computing power, it is now possible to reconstruct large-scale gene regulatory networks[1]. Because gene expression data capture only mRNA abundance, not binding information, networks inferred from such data describe regulatory interactions between regulators and their potential targets, gene-gene interactions, and potential protein-protein interactions. In this paper, we refer to a network inferred in this way as a gene regulatory network[32]. In gene regulatory networks, genes can be divided into two categories. Transcription factors (TFs), also known as trans-acting factors, are DNA-binding proteins that specifically interact with the cis-acting elements of eukaryotic genes and activate or inhibit gene transcription. A gene that receives this activation or repression is referred to as a target gene. It is important to note that transcription factors themselves may also be target genes, i.e., there may be mutual regulation in the regulatory network.
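The abstract does not give the paper's formulation, but a common sparse-optimization approach to GRN inference is to regress each target gene's expression on candidate TF expression with an L1 penalty, so that only a few regulators receive nonzero weights. The sketch below is a minimal, hypothetical illustration of that idea (the function names, toy data, and ISTA solver are this example's assumptions, not the paper's method):

```python
# Hypothetical sketch: inferring regulators of one target gene via
# L1-penalized least squares (lasso), solved with plain ISTA.
# All names and data here are illustrative, not from the paper.

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrink z toward zero by t.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_ista(X, y, lam=0.05, step=0.01, iters=2000):
    # X: samples x candidate-TF expression matrix, y: target expression.
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # Residual r = Xw - y.
        r = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        # Gradient of the squared loss: X^T r.
        grad = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        # Gradient step followed by soft-thresholding (sparsity).
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(p)]
    return w

# Toy data: 3 candidate TFs; only TF 0 actually drives the target.
X = [[1.0, 0.2, -0.5], [2.0, -0.1, 0.3], [0.5, 0.4, 0.1], [1.5, -0.3, -0.2]]
y = [2.0, 4.0, 1.0, 3.0]  # target = 2 * (TF 0 expression)
w = lasso_ista(X, y)
```

Running the full inference would repeat this regression once per target gene and assemble the nonzero coefficients into the edges of the network.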


Investigation of Independent Reinforcement Learning Algorithms in Multi-Agent Environments

Lee, Ken Ming, Subramanian, Sriram Ganapathi, Crowley, Mark

arXiv.org Artificial Intelligence

Independent reinforcement learning algorithms have no theoretical guarantees for finding the best policy in multi-agent settings. However, in practice, prior works have reported good performance with independent algorithms in some domains and bad performance in others. Moreover, a comprehensive study of the strengths and weaknesses of independent algorithms is lacking in the literature. In this paper, we carry out an empirical comparison of the performance of independent algorithms on four PettingZoo environments that span the three main categories of multi-agent environments, i.e., cooperative, competitive, and mixed. We show that in fully observable environments, independent algorithms can perform on par with multi-agent algorithms in cooperative and competitive settings. For the mixed environments, we show that agents trained via independent algorithms learn to perform well individually, but fail to learn to cooperate with allies and compete with enemies. We also show that adding recurrence improves the learning of independent algorithms in cooperative partially observable environments.
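The defining trait of an independent algorithm is that each agent learns from only its own actions and rewards, treating the other agents as part of the environment. The following toy sketch (not the paper's code, and using a hand-rolled matrix game rather than PettingZoo) shows two independent epsilon-greedy Q-learners in a repeated cooperative coordination game:

```python
# Hypothetical sketch: two independent Q-learners in a stateless
# cooperative matrix game. Each agent updates its own Q-values from
# its own action and the shared reward, ignoring the other's action.
import random

random.seed(0)
ACTIONS = [0, 1]

def reward(a0, a1):
    # Cooperative payoff: both agents are rewarded only if they match.
    return 1.0 if a0 == a1 else 0.0

# q[agent][action]; the game has no state, so Q is just per-action.
q = [[0.0, 0.0], [0.0, 0.0]]
alpha, eps = 0.1, 0.1

for step in range(5000):
    acts = []
    for i in range(2):
        if random.random() < eps:
            acts.append(random.choice(ACTIONS))   # explore
        else:
            acts.append(max(ACTIONS, key=lambda a: q[i][a]))  # exploit
    r = reward(acts[0], acts[1])
    for i in range(2):
        # Independent update: each agent sees only its own action.
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

greedy = [max(ACTIONS, key=lambda a: q[i][a]) for i in range(2)]
```

In this simple fully observable setting the two learners lock onto a matching action pair, consistent with the abstract's finding that independent algorithms can do well in cooperative settings; the failure modes it reports arise in mixed and partially observable environments, which this stateless game does not model.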